Reliable and cost-effective counting of people in large indoor spaces is a significant challenge with many applications. An emerging approach is to deploy multiple fisheye cameras mounted overhead to monitor the whole space. However, due to the overlapping fields of view, person re-identification (PRID) is critical for counting accuracy. While PRID has been thoroughly researched for traditional rectilinear cameras, few methods have been proposed for fisheye cameras and their performance is comparatively lower. To close this performance gap, we propose a multi-feature framework for fisheye PRID in which we combine deep-learning, color-based and location-based features by means of a novel feature fusion. We evaluate the performance of our framework for various feature combinations on FRIDA, a public fisheye PRID dataset. The results demonstrate that our multi-feature approach outperforms recent appearance-based deep-learning methods by almost 18 percentage points and location-based methods by almost 3 percentage points in accuracy.
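The abstract does not spell out the fusion rule; as a minimal illustration, a late-fusion score over the three cues could look like the sketch below, where the weights w_deep, w_color, w_loc and the location scale are hypothetical values, not taken from the paper.

import numpy as np

def fused_similarity(emb_a, emb_b, hist_a, hist_b, loc_a, loc_b,
                     w_deep=0.6, w_color=0.3, w_loc=0.1, loc_scale=2.0):
    """Late fusion of deep, color and location cues (illustrative weights only)."""
    # Deep cue: cosine similarity between CNN appearance embeddings.
    s_deep = float(np.dot(emb_a, emb_b) /
                   (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-8))
    # Color cue: histogram intersection of normalised color histograms.
    s_color = float(np.minimum(hist_a, hist_b).sum())
    # Location cue: distance between floor-plane positions, mapped to [0, 1]
    # with a soft scale (in metres).
    s_loc = float(np.exp(-np.linalg.norm(loc_a - loc_b) / loc_scale))
    return w_deep * s_deep + w_color * s_color + w_loc * s_loc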
Unsupervised contrastive learning (UCL) is a self-supervised learning technique that aims to learn useful representations by pulling positive samples close together while pushing negative samples far apart in the embedding space. To improve the performance of UCL, several works have introduced hard-negative unsupervised contrastive learning (H-UCL), which aims to select "hard" negative samples rather than relying on the random sampling strategy used in UCL. In a separate line of work, supervised contrastive learning (SCL) has recently been developed by extending UCL to the fully supervised setting under the assumption that label information is available. In this paper, motivated by the effectiveness of hard-negative sampling in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL). Our numerical results demonstrate the effectiveness of H-SCL over SCL and H-UCL on several image datasets. In addition, we show theoretically that, under certain conditions, the objective function of H-SCL can be bounded by the objective function of H-UCL but not by that of UCL. Thus, minimizing the H-UCL loss can serve as a proxy for minimizing the H-SCL loss, while minimizing the UCL loss cannot. As we show numerically that H-SCL outperforms other contrastive learning methods, our theoretical result (bounding the H-SCL loss by the H-UCL loss) helps explain why H-UCL outperforms UCL in practice.
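A minimal sketch of a hard-negative-weighted supervised contrastive loss in the spirit of H-SCL is shown below; the reweighting scheme and the hyper-parameters temperature and beta are illustrative assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

def hard_negative_sup_con_loss(z, labels, temperature=0.1, beta=1.0):
    # z: (N, d) embeddings, labels: (N,) integer class labels.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                                # pairwise similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye    # same-class pairs
    neg = labels.unsqueeze(0) != labels.unsqueeze(1)             # different-class pairs
    exp_sim = torch.exp(sim)
    with torch.no_grad():
        # Up-weight "hard" negatives (high similarity); normalise so the
        # average weight per anchor is one.
        w = torch.exp(beta * sim) * neg
        w = w * neg.sum(1, keepdim=True) / (w.sum(1, keepdim=True) + 1e-8)
    denom = (exp_sim * pos).sum(1, keepdim=True) + (w * exp_sim).sum(1, keepdim=True)
    log_prob = sim - torch.log(denom + 1e-8)
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()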
In this paper, we propose a new domain generalization (DG) framework based on a new upper bound on the risk of unseen domains. In particular, our framework jointly minimizes both the covariate shift and the concept shift between the seen domains, leading to better performance on unseen domains. While the proposed approach can be implemented with an arbitrary combination of covariate-alignment and concept-alignment modules, in this work we use well-established distribution-alignment methods, namely maximum mean discrepancy (MMD) and correlation alignment (CORAL), and use invariant risk minimization (IRM) for concept alignment. Our numerical results show that the proposed method performs on par with or better than state-of-the-art domain generalization methods on several datasets.
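A sketch of how such a joint objective could be assembled is given below, combining a standard ERM term with linear-MMD and CORAL penalties for covariate alignment and an IRMv1-style penalty for concept alignment; the penalty weights are placeholders and the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def coral(f_s, f_t):
    """CORAL: squared Frobenius distance between feature covariances."""
    cs, ct = torch.cov(f_s.t()), torch.cov(f_t.t())
    return ((cs - ct) ** 2).sum() / (4 * f_s.size(1) ** 2)

def linear_mmd(f_s, f_t):
    """Linear-kernel MMD: squared distance between feature means."""
    return ((f_s.mean(0) - f_t.mean(0)) ** 2).sum()

def irm_penalty(logits, y):
    """IRMv1 penalty: squared gradient of the risk w.r.t. a dummy scale."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def dg_objective(feats, logits, ys, lam_mmd=1.0, lam_coral=1.0, lam_irm=1.0):
    """feats/logits/ys: lists with one entry per seen training domain."""
    erm = sum(F.cross_entropy(l, y) for l, y in zip(logits, ys)) / len(ys)
    align, pairs = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            align = align + lam_mmd * linear_mmd(feats[i], feats[j]) \
                          + lam_coral * coral(feats[i], feats[j])
            pairs += 1
    irm = sum(irm_penalty(l, y) for l, y in zip(logits, ys)) / len(ys)
    return erm + align / max(pairs, 1) + lam_irm * irm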
Invariance-based approaches such as invariant risk minimization (IRM) have recently emerged as promising methods for domain generalization (DG). Despite their promising theory, such approaches fail in common classification tasks because true invariant features become mixed with spurious invariant features. To address this, we propose a framework based on the conditional entropy minimization (CEM) principle to filter out the spurious invariant features, leading to a new algorithm with better generalization capability. We show that our proposed approach is closely related to the well-known information bottleneck (IB) framework and prove that, under certain assumptions, entropy minimization can exactly recover the true invariant features. Compared with recent state-of-the-art principled alternatives, our approach achieves competitive classification accuracy on several DG datasets.
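The abstract does not give the estimator; a common surrogate for minimizing the conditional entropy H(Y|Z) is to penalize the average entropy of the model's predictive distribution, as in the sketch below (illustrative only, not necessarily the paper's formulation).

import torch.nn.functional as F

def conditional_entropy_penalty(logits):
    """Surrogate for H(Y | Z): mean entropy of the predictive distribution."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

def cem_objective(logits, y, lam=0.1):
    """Classification loss plus an entropy-minimisation penalty (illustrative weight)."""
    return F.cross_entropy(logits, y) + lam * conditional_entropy_penalty(logits)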
We study the problem of designing hard-negative sampling distributions for unsupervised contrastive representation learning. We analyze a novel min-max framework that seeks a representation minimizing the maximum (worst-case) generalized contrastive learning loss over all couplings (joint distributions between positive and negative samples) and prove that the resulting min-max optimal representation is degenerate. This provides the first theoretical justification for incorporating additional regularization constraints on the couplings. We reinterpret the min-max problem through the lens of optimal transport theory and use regularized transport couplings to control the hardness of the negative examples. We prove that the recently proposed state-of-the-art hard-negative sampling distribution is a special case corresponding to an entropic regularization of the coupling.
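Schematically, and with notation introduced here rather than taken from the paper, the regularized min-max objective described above can be written as

\[
\min_{f}\ \max_{\pi \in \Pi}\ \mathbb{E}_{(x,\,x^{+},\,x^{-}) \sim \pi}\big[\ell\big(f(x), f(x^{+}), f(x^{-})\big)\big] \;-\; \epsilon\,\mathrm{KL}\big(\pi \,\|\, \pi_{0}\big),
\]

where \(\Pi\) is the set of admissible couplings, \(\pi_{0}\) is the independent (random-sampling) coupling, and \(\epsilon\) sets the regularization strength: in this schematic, large \(\epsilon\) keeps the negatives close to random sampling, while small \(\epsilon\) allows harder, more adversarial negatives.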
With the growing global deployment of carbon capture and sequestration technology to combat climate change, monitoring and detection of potential CO2 leakage through existing or storage-induced faults are critical to the safe and long-term viability of the technology. Recent work on time-lapse seismic monitoring of CO2 storage has shown promising results in its ability to monitor the growth of the CO2 plume from surface-recorded seismic data. However, due to the low sensitivity of seismic imaging to CO2 concentration, additional developments are required to efficiently interpret the seismic images for leakage. In this work, we introduce a binary classification of time-lapse seismic images to delineate CO2 plumes (leakage) using state-of-the-art deep learning models. Additionally, we localize the leakage region of CO2 plumes by leveraging Class Activation Mapping methods.
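A minimal sketch of the localization step is shown below, assuming a plain CAM setup (a CNN ending in global average pooling followed by a linear classifier); the paper refers to Class Activation Mapping methods more broadly, so this is only one illustrative variant.

import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weight, class_idx, out_size):
    """Plain CAM: weight the last conv feature maps by the chosen class's
    fully-connected weights, then upsample to the seismic image size.

    feature_maps: (1, C, h, w) activations before global average pooling.
    fc_weight:    (num_classes, C) weight of the final linear layer.
    """
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], feature_maps[0])
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
    cam = F.interpolate(cam[None, None], size=out_size,
                        mode='bilinear', align_corners=False)[0, 0]
    return cam  # high values indicate the region driving the "leakage" decision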
This work presents Explainable Artificial Intelligence (AI) in the form of an interpretable and semiautomatic approach to stage grading of ocular pathologies such as Diabetic retinopathy, Hypertensive retinopathy, and other retinopathies against the backdrop of major systemic diseases. The experimental study aims to evaluate an explainable staged grading process without using deep Convolutional Neural Networks (CNNs) directly. Many current CNN-based deep neural networks used for diagnosing retinal disorders may achieve appreciable performance but fail to pinpoint the basis for their decisions. To improve the transparency of these decisions, we propose a clinician-in-the-loop assisted intelligent workflow that performs a retinal vascular assessment on fundus images to derive quantifiable and descriptive parameters. The retinal vessel parameter metadata serve as hyper-parameters for better interpretation and explainability of decisions. The semiautomatic methodology aims at a federated approach to AI in healthcare applications with more inputs and interpretations from clinicians. The baseline machine learning pipeline relies on image processing techniques for optic disc detection, vessel segmentation, and arteriole/venule identification.
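As an illustration of the kind of classical image-processing step involved, the sketch below shows a common baseline for the vessel-segmentation stage (green-channel extraction, CLAHE enhancement, morphological black-hat, Otsu thresholding); it is a generic baseline, not the authors' exact pipeline.

import cv2

def segment_vessels(fundus_bgr):
    """Baseline retinal vessel segmentation (illustrative, not the paper's exact method)."""
    green = fundus_bgr[:, :, 1]                       # vessels contrast best in the green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # Black-hat highlights dark, thin vessels against the brighter background.
    vessels = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.medianBlur(mask, 3)                    # remove small speckle
    return mask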
Objective: We aim to develop an open-source natural language processing (NLP) package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models to extract social determinants of health (SDoH) for cancer patients, examine the generalizability of SODA to a new disease domain (i.e., opioid use), and evaluate the extraction rate of SDoH using cancer populations. Methods: We identified SDoH categories and attributes and developed an SDoH corpus using clinical notes from a general cancer cohort. We compared four transformer-based NLP models for extracting SDoH, examined the generalizability of the NLP models to a cohort of patients prescribed opioids, and explored customization strategies to improve performance. We applied the best NLP model to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804), and colorectal cancer (n=6,240) cohorts. Results and Conclusion: We developed a corpus of notes from 629 cancer patients with annotations of 13,193 SDoH concepts/attributes from 19 categories of SDoH. The Bidirectional Encoder Representations from Transformers (BERT) model achieved the best strict/lenient F1 scores of 0.9216 and 0.9441 for SDoH concept extraction, and 0.9617 and 0.9626 for linking attributes to SDoH concepts. Fine-tuning the NLP models using new annotations from opioid use patients improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The extraction rates among the 19 categories of SDoH varied greatly: 10 SDoH could be extracted from >70% of cancer patients, but 9 SDoH had a low extraction rate (<70% of cancer patients). The SODA package with pre-trained transformer models is publicly available at https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
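The abstract frames SDoH concept extraction as transformer-based token classification; a minimal sketch of running such a model over a clinical note is shown below. The checkpoint name, label count, and example sentence are placeholders, not the released SODA models or data.

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Placeholder checkpoint and label space (e.g. BIO tags for 19 SDoH categories).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=39)

note = "Patient lives alone, currently unemployed, reports smoking one pack per day."
inputs = tokenizer(note, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# Each token now carries a predicted tag for an SDoH concept span.
for tok, tag_id in zip(tokens, pred_ids.tolist()):
    print(tok, tag_id)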
To apply federated learning to drug discovery, we developed a novel platform in the context of the European Innovative Medicines Initiative (IMI) project MELLODDY (grant no. 831472), which comprised 10 pharmaceutical companies, academic research labs, large industrial companies and startups. The MELLODDY platform was the first industry-scale platform to enable the creation of a global federated model for drug discovery without sharing the confidential data sets of the individual partners. The federated model was trained on the platform by aggregating the gradients of all contributing partners in a cryptographically secure way after each training iteration. The platform was deployed on an Amazon Web Services (AWS) multi-account architecture running Kubernetes clusters in private subnets. Organisationally, the roles of the different partners were codified as different rights and permissions on the platform and administered in a decentralized way. The MELLODDY platform generated new scientific discoveries which are described in a companion paper.
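For illustration only, the arithmetic of one aggregation step might look like the sketch below; the actual MELLODDY platform performs this aggregation cryptographically so that no individual partner's gradients are ever exposed.

import numpy as np

def aggregate_gradients(partner_grads, weights=None):
    """Average per-partner gradients for one training iteration.

    Illustrative only: the platform aggregates encrypted gradients so no
    partner's contribution is revealed; this plain weighted average just
    shows the arithmetic being computed.
    """
    if weights is None:
        weights = [1.0 / len(partner_grads)] * len(partner_grads)
    return [sum(w * g[i] for w, g in zip(weights, partner_grads))
            for i in range(len(partner_grads[0]))]

# Example: three partners, each holding gradients for two parameter tensors.
grads = [[np.ones((2, 2)) * k, np.ones(3) * k] for k in (1.0, 2.0, 3.0)]
global_update = aggregate_gradients(grads)   # element-wise mean across partners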
This work proposes Multi-task Meta Learning (MTML), which integrates two learning paradigms, Multi-Task Learning (MTL) and meta learning, to bring together the best of both worlds. In particular, it focuses on the simultaneous learning of multiple tasks, an element of MTL, and on promptly adapting to new tasks with less data, a quality of meta learning. It is important to highlight that we focus on heterogeneous tasks, which are of distinct kinds, in contrast to the typically considered homogeneous tasks (e.g., all tasks being classification, or all tasks being regression). The fundamental idea is to train a multi-task model such that, when an unseen task is introduced, it can learn in fewer steps while offering performance at least as good as conventional single-task learning on the new task or its inclusion within the MTL framework. Through various experiments, we demonstrate this paradigm on two datasets and four tasks: NYU-v2 and the taskonomy dataset, for which we perform semantic segmentation, depth estimation, surface normal estimation, and edge detection. MTML achieves state-of-the-art results for most of the tasks. Although semantic segmentation suffers quantitatively, our MTML method learns to identify segmentation classes absent from the pseudo-labelled ground truth of the taskonomy dataset.
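One way to read this paradigm is sketched below: a shared encoder with task-specific heads is trained jointly on the heterogeneous tasks, and a fresh head is adapted in a few steps on an unseen task's support set. This is an illustrative interpretation under those assumptions, not the authors' exact MTML algorithm.

import copy
import torch
import torch.nn as nn

class MTMLModel(nn.Module):
    """Shared encoder with one head per (heterogeneous) task."""
    def __init__(self, in_dim, hidden, head_dims):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in head_dims)

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

def multi_task_step(model, batches, losses, opt):
    """One MTL step: sum the per-task losses over their own batches."""
    opt.zero_grad()
    total = sum(loss_fn(model(x, t), y)
                for t, ((x, y), loss_fn) in enumerate(zip(batches, losses)))
    total.backward()
    opt.step()

def adapt_to_new_task(model, new_head, support, loss_fn, steps=5, lr=1e-2):
    """Few-step adaptation to an unseen task: freeze the shared encoder and
    train only a fresh task head on the small support set."""
    head = copy.deepcopy(new_head)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    x, y = support
    with torch.no_grad():
        z = model.encoder(x)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(head(z), y).backward()
        opt.step()
    return head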